
    Direct neural-network hardware-implementation algorithm

    An algorithm for compact neural-network hardware implementation is presented, which exploits special properties of the Boolean functions describing the operation of artificial neurones with a step activation function. The algorithm comprises three steps: digitisation of the ANN mathematical model, conversion of the digitised model into a logic gate structure, and hardware optimisation by elimination of redundant logic gates. A set of C++ programs automates the algorithm, generating optimised VHDL code. This strategy bridges the gap between ANN design software and hardware design packages (Xilinx). Although the method is directly applicable only to neurones with step activation functions, it can be extended to sigmoidal functions.
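    The starting observation behind such an algorithm is that a step-activation neurone with fixed weights computes a Boolean threshold function, which is what makes a purely logical gate-level realisation possible. A minimal sketch of that observation (the weights and threshold below are illustrative, not taken from the paper):

```python
from itertools import product

def step_neuron_truth_table(weights, threshold):
    """Enumerate the Boolean function computed by a step-activation
    neurone: output 1 when the weighted input sum reaches the threshold."""
    table = {}
    for inputs in product((0, 1), repeat=len(weights)):
        total = sum(w * x for w, x in zip(weights, inputs))
        table[inputs] = int(total >= threshold)
    return table

# Illustrative weights/threshold: a 2-input neurone with weights (1, 1)
# and threshold 2 reduces to a single AND gate.
table = step_neuron_truth_table([1, 1], 2)
```

    A truth table of this kind is the digitised form that can then be converted to a gate structure and minimised.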

    Fire detection of Unmanned Aerial Vehicle in a Mixed Reality-based System

    This paper proposes the use of a low-cost micro-electro-mechanical system comprising an inertial measurement unit (IMU), a consumer-grade digital camera and a fire detection algorithm on a nano unmanned aerial vehicle for inspection applications. The video stream (monocular camera) and navigation data (IMU) feed a state-of-the-art indoor/outdoor navigation system. The system combines the Robot Operating System and computer vision techniques to render the metric scale of monocular vision and gravity observable, providing robust, accurate and novel inter-frame motion estimates. The collected onboard data are communicated to the ground station and processed using a Simultaneous Localisation and Mapping (SLAM) system. A robust and efficient re-localisation SLAM was used to recover from tracking failure, motion blur and frame loss in the received data. The fire detection algorithm was based on colour, movement attributes, the temporal variation of the fire's intensity and its accumulation around a point. A cumulative time-derivative matrix was used to detect areas with the fire's high-frequency luminance flicker (a random characteristic) by analysing frame-by-frame changes. Colour, surface coarseness, boundary roughness and skewness features were considered while the quadrotor flew autonomously within cluttered and congested areas. A Mixed Reality system was adopted to visualise and test the proposed system in a physical/virtual environment. The results showed that the UAV could successfully detect fire and flame, fly towards and hover around it, communicate with the ground station and generate the SLAM map.
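    The cumulative time-derivative idea can be sketched as follows. This is a simplified illustration of the general technique, not the authors' implementation; the `decay` forgetting factor is an assumed parameter:

```python
def cumulative_time_derivative(frames, decay=0.9):
    """Accumulate absolute frame-by-frame intensity changes into a matrix;
    pixels with high-frequency luminance flicker (a known fire
    characteristic) build up large values, while static pixels stay near
    zero. Frames are greyscale images as nested lists."""
    rows, cols = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * cols for _ in range(rows)]
    prev = frames[0]
    for frame in frames[1:]:
        for r in range(rows):
            for c in range(cols):
                acc[r][c] = decay * acc[r][c] + abs(frame[r][c] - prev[r][c])
        prev = frame
    return acc

# A pixel flickering between dark and bright accumulates a large value.
frames = [
    [[0, 10], [10, 10]],
    [[255, 10], [10, 10]],
    [[0, 10], [10, 10]],
]
flicker_map = cumulative_time_derivative(frames)
```

    Thresholding such a map isolates candidate fire regions before the colour and texture features are applied.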

    A Novel Development of Acoustic SLAM

    This paper explores and develops the novel idea of using acoustics to map and navigate indoor environments. The system requirements, modelling and evaluation are addressed, alongside the design and development process, testing methods, desired outcomes and practical applications. Previous work in this field demonstrates that it is possible to use first-order echoes to map a room. The current paper reports initial research to develop such algorithms into a simultaneous localization and mapping algorithm, capable not only of mapping rooms with sound but also of navigating them. Such a system is intended to help visually impaired people navigate rooms by making use of sounds and their echoes, thus `listening' their way through a room. The paper overviews the approach taken towards developing a navigation algorithm using sound, as well as the associated modelling, simulation and testing strategies enabling the desired outcomes of this type of system.
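    The first-order-echo principle underlying such mapping is simple time-of-flight geometry: the round-trip delay of an echo gives the distance to the reflecting wall. A minimal sketch (speed of sound assumed for air at roughly 20 °C):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 deg C (assumed)

def wall_distance(echo_delay_s):
    """Distance to a reflecting wall from the round-trip delay of a
    first-order echo: the sound travels to the wall and back, so the
    one-way distance is half the total path."""
    return SPEED_OF_SOUND * echo_delay_s / 2.0

# A 20 ms echo delay corresponds to a wall ~3.43 m away.
d = wall_distance(0.02)
```

    Combining several such wall distances from different emitter positions is what allows a room outline, and eventually a SLAM estimate, to be built up.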

    An Intelligent Computer-Aided Scheme for Classifying Multiple Skin Lesions

    Skin disease cases are increasing daily and are difficult to handle due to the global imbalance between skin disease patients and dermatologists. Skin diseases are among the top five leading causes of the worldwide disease burden. To reduce this burden, computer-aided diagnosis (CAD) systems are in high demand. Single-disease classification is the major shortcoming of existing work; because of the similar characteristics of skin diseases, classification of multiple skin lesions is very challenging. This research work extends our existing work with a novel scheme for multi-class classification. The proposed framework classifies an input skin image into one of six non-overlapping classes: healthy, acne, eczema, psoriasis, benign, and malignant melanoma. The framework comprises four steps, i.e., pre-processing, segmentation, feature extraction and classification, with different image processing and machine learning techniques used to accomplish each step. Using 10-fold cross-validation, experiments were performed on 1800 images, and an accuracy of 94.74% was achieved with a quadratic Support Vector Machine. The proposed classification scheme can help patients through early classification of skin lesions.
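    The 10-fold cross-validation protocol mentioned above partitions the data so that every image is tested exactly once. A minimal index-splitting sketch (a generic illustration, not the authors' code):

```python
def k_fold_indices(n_samples, k=10):
    """Yield (train, test) index lists for k-fold cross-validation;
    every sample appears in exactly one test fold."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for i in range(k):
        start = i * fold_size
        end = start + fold_size if i < k - 1 else n_samples
        test = indices[start:end]
        train = indices[:start] + indices[end:]
        yield train, test

# With 1800 images and k=10, each fold tests 180 and trains on 1620.
folds = list(k_fold_indices(1800, 10))
```

    The reported accuracy is then the average test performance across the ten folds, which reduces the variance of a single train/test split.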

    Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.

    The research was supported by MRC grant G0701870 and the Vision and Eye Research Unit (VERU), Postgraduate Medical Institute at Anglia Ruskin University. This is the final version of the article. It first appeared from Springer via http://dx.doi.org/10.3758/s13414-015-1015-
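    Of the primary cues listed above, sound level is the simplest to formalise: in a free field the level falls by roughly 6 dB per doubling of distance (inverse-square law), so a level difference implies a distance ratio. A sketch of that relationship (free-field assumption only; reverberation and frequency cues are ignored):

```python
def relative_distance_from_level(delta_db):
    """Under the inverse-square law, sound pressure level drops
    ~6.02 dB per doubling of distance; invert this to get the distance
    ratio implied by a level difference (free-field assumption)."""
    return 10 ** (delta_db / 20.0)

# A source 6.02 dB quieter is perceived (other cues aside) as ~twice as far.
ratio = relative_distance_from_level(6.02)
```

    Real rooms add reverberation, which is why the direct-to-reverberant ratio becomes an important complementary cue indoors.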

    Mobile-based Skin Lesions Classification Using Convolution Neural Network

    This research work investigates the skin lesion classification problem using a Convolutional Neural Network (CNN) within a cloud-server architecture. Using cloud services and a CNN, a real-time mobile-enabled skin lesion classification expert system, “i-Rash”, is proposed and developed. i-Rash is aimed at the early diagnosis of acne, eczema and psoriasis at remote locations. The classification model used in i-Rash is built on the CNN model SqueezeNet. A transfer learning approach is used for training, and the model is trained and tested on 1856 images. The benefit of using SqueezeNet is the limited size of the trained model, i.e. only 3 MB. For classifying a new image, a cloud-based architecture is used, with the trained model deployed on a server. A new image is classified in fractions of a second, with overall accuracy, sensitivity and specificity of 97.21%, 94.42% and 98.14% respectively. i-Rash can serve in the initial classification of skin lesions and can therefore play a very important role in the early classification of skin lesions for people living in remote areas.
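    The reported accuracy, sensitivity and specificity follow from standard binary confusion-matrix definitions. A sketch with illustrative counts (not the study's data):

```python
def diagnostic_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity and specificity from binary
    confusion-matrix counts (true/false positives and negatives)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return accuracy, sensitivity, specificity

# Illustrative counts only: 90 TP, 80 TN, 20 FP, 10 FN.
acc, sens, spec = diagnostic_metrics(90, 80, 20, 10)
```

    For a multi-class system these are typically computed per class in a one-vs-rest fashion and then averaged.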

    An adaptive self-organizing fuzzy logic controller in a serious game for motor impairment rehabilitation

    Rehabilitation robotics combined with video game technology provides a means of assisting the rehabilitation of patients with neuromuscular disorders through various facilitation movements. The current work presents ReHabGame, a serious game using a fusion of implemented technologies that can be easily used by patients and therapists to assess and enhance sensorimotor performance and increase activity in patients' daily lives. The game allows a player to control avatar movements through a Kinect Xbox sensor, Myo armband and rudder foot pedal, and involves a series of reach-grasp-collect tasks whose difficulty levels are learnt by a fuzzy inference system. The orientation, angular velocity, head and spine tilts and other data generated by the player are monitored and saved, whilst task completion is calculated by solving an inverse kinematics algorithm that orients the upper limb joints of the avatar. The different values of upper-body quantities of movement provide fuzzy input from which a crisp output is determined and used to generate an appropriate subsequent rehabilitation game level. The system can thus provide personalised, autonomously learnt rehabilitation programmes for patients with neuromuscular disorders, with superior predictions to guide the development of improved clinical protocols compared to traditional therapeutic activities.
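    A fuzzy mapping from movement quality to game level can be sketched with triangular membership functions and weighted-average defuzzification; the sets, rules and level values below are illustrative, not those of ReHabGame:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def next_difficulty(movement):
    """Map a normalised movement-quality score (0..1) to a crisp game
    difficulty level via a tiny Mamdani-style rule base with
    weighted-average defuzzification."""
    low = tri(movement, -0.5, 0.0, 0.5)
    medium = tri(movement, 0.0, 0.5, 1.0)
    high = tri(movement, 0.5, 1.0, 1.5)
    # Rules: low quality -> level 1, medium -> level 2, high -> level 3.
    weights = [low, medium, high]
    levels = [1.0, 2.0, 3.0]
    total = sum(weights)
    return sum(w * l for w, l in zip(weights, levels)) / total if total else 1.0
```

    Intermediate scores blend adjacent rules, so difficulty changes smoothly rather than in hard steps, which is the usual motivation for a fuzzy controller in this setting.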

    The accuracy of auditory spatial judgments in the visually impaired is dependent on sound source distance

    Funder: This research was supported by the Vision and Eye Research Institute, School of Medicine at Anglia Ruskin University.

    Abstract: Blindness leads to substantial enhancements in many auditory abilities, and deficits in others. It is unknown how severe visual losses need to be before changes in auditory abilities occur, or whether the relationship between severity of visual loss and changes in auditory abilities is proportional and systematic. Here we show that greater severity of visual loss is associated with increased auditory judgments of distance and room size. On average, participants with severe visual losses perceived sounds to be twice as far away, and rooms to be three times larger, than sighted controls. Distance estimates for sighted controls were most accurate for closer sounds and least accurate for farther sounds. As the severity of visual impairment increased, accuracy decreased for closer sounds and increased for farther sounds. However, it is for closer sounds that accurate judgments are needed to guide rapid motor responses to auditory events, e.g. planning a safe path through a busy street to avoid collisions with other people, and falls. Interestingly, greater visual impairment severity was associated with more accurate room size estimates. The results support a new hypothesis that crossmodal calibration of audition by vision depends on the severity of visual loss.

    Partial visual loss disrupts the relationship between judged room size and sound source distance.

    Funder: Vision and Eye Research Institute, School of Medicine, Faculty of Health, Education, Medicine and Social Care, Anglia Ruskin University.

    Visual spatial information plays an important role in calibrating auditory space. Blindness results in deficits in a number of auditory abilities, which have been explained in terms of the hypothesis that visual information is needed to calibrate audition. When judging the size of a novel room when only auditory cues are available, normally sighted participants may use the location of the farthest sound source to infer the nearest possible distance of the far wall. However, for people with partial visual loss (distinct from blindness in that some vision is present), such a strategy may not be reliable if vision is needed to calibrate auditory cues for distance. In the current study, participants were presented with sounds at different distances (ranging from 1.2 to 13.8 m) in a simulated reverberant (T60 = 700 ms) or anechoic room. Farthest distance judgments and room size judgments (volume and area) were obtained from blindfolded participants (18 normally sighted, 38 partially sighted) for speech, music, and noise stimuli. With sighted participants, the judged room volume and farthest sound source distance estimates were positively correlated (p < 0.05) for all conditions. Participants with visual losses showed no significant correlations for any of the conditions tested. A similar pattern of results was observed for the correlations between farthest distance and room floor area estimates. The results demonstrate that partial visual loss disrupts the relationship between judged room size and sound source distance that is shown by sighted participants.
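    The correlation analysis described can be illustrated with a plain Pearson coefficient; the implementation below is a generic sketch, and the data fed to it are illustrative, not the study's measurements:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Perfectly proportional distance and volume judgments give r = 1.0
# (illustrative data: farthest-distance estimates vs judged room volume).
r = pearson_r([1.2, 4.0, 9.0, 13.8], [30.0, 100.0, 225.0, 345.0])
```

    Values of r near zero across participants with partial visual loss would correspond to the absent correlations the study reports.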

    Comparison of auditory spatial bisection and minimum audible angle in front, lateral, and back space

    Abstract: Although vision is important for calibrating auditory spatial perception, it only provides information about frontal sound sources. Previous studies of blind and sighted people support the idea that azimuthal spatial bisection in frontal space requires visual calibration, while detection of a change in azimuth (minimum audible angle, MAA) does not. The influence of vision on the ability to map frontal, lateral and back space has not been investigated. Performance in spatial bisection and MAA tasks was assessed for normally sighted blindfolded subjects using bursts of white noise presented frontally, laterally, or from the back relative to the subjects. Thresholds for both tasks were similar in frontal space, lower for the MAA task than for the bisection task in back space, and higher for the MAA task in lateral space. Two interpretations of the results are discussed, one in terms of visual calibration and the use of internal representations of source location, and the other based on comparison of the magnitude or direction of change of the available binaural cues. That bisection thresholds were increased in back space relative to front space, where visual calibration information is unavailable, suggests that an internal representation of source location was used for the bisection task.